

On-line Learning of Dichotomies

Neural Information Processing Systems

The performance of on-line algorithms for learning dichotomies is studied. In on-line learning, the number of examples P is equivalent to the learning time, since each example is presented only once. The learning curve, or generalization error as a function of P, depends on the schedule at which the learning rate is lowered. For a target that is a perceptron rule, the learning curve of the perceptron algorithm can decrease as fast as P^{-1} if the schedule is optimized. If the target is not realizable by a perceptron, the perceptron algorithm does not generally converge to the solution with lowest generalization error.
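The setting described above can be illustrated with a minimal sketch: a mistake-driven perceptron that sees each example exactly once, with a 1/t learning-rate schedule. All names and constants here (dimension, schedule constant c, number of examples) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of on-line perceptron learning with an annealed
# learning rate eta_t = c / t. Each example is presented only once,
# so the number of examples P plays the role of learning time.

rng = np.random.default_rng(0)
d = 20                            # input dimension (illustrative)
w_target = rng.standard_normal(d) # teacher perceptron defining the rule

w = np.zeros(d)                   # student weights
P = 5000                          # number of on-line examples
c = 1.0                           # schedule constant (illustrative)

for t in range(1, P + 1):
    x = rng.standard_normal(d)
    y = np.sign(w_target @ x)     # target label
    if np.sign(w @ x) != y:       # mistake-driven update
        w += (c / t) * y * x      # 1/t learning-rate schedule

# Generalization error: fraction of fresh inputs where student
# and teacher disagree.
X = rng.standard_normal((10000, d))
err = np.mean(np.sign(X @ w_target) != np.sign(X @ w))
print(err)
```

Rerunning with larger P should show the error shrinking roughly on the P^{-1} scale when the schedule is tuned, which is the behavior the abstract refers to.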


On the Generalization Ability of On-Line Learning Algorithms

Neural Information Processing Systems

In this paper we show that on-line algorithms for classification and regression can be naturally used to obtain hypotheses with good data-dependent tail bounds on their risk. Our results are proven without requiring complicated concentration-of-measure arguments and they hold for arbitrary on-line learning algorithms. Furthermore, when applied to concrete on-line algorithms, our results yield tail bounds that in many cases are comparable or better than the best known bounds.
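One common way to turn an on-line algorithm's sequence of hypotheses into a single predictor, sketched below, is to keep a running average of the iterates; for convex losses the averaged hypothesis inherits the on-line regret guarantees. This is an illustrative construction under assumed settings (logistic loss, fixed step size), not necessarily the specific procedure analyzed in the paper.

```python
import numpy as np

# Sketch: on-line gradient descent on the logistic loss, keeping a
# running average of the iterates as the final hypothesis.

rng = np.random.default_rng(1)
d, T = 10, 2000
w_true = rng.standard_normal(d)   # data-generating separator

w = np.zeros(d)       # current on-line iterate
w_avg = np.zeros(d)   # running average of iterates
eta = 0.1             # step size (illustrative)

for t in range(1, T + 1):
    x = rng.standard_normal(d)
    y = 1.0 if w_true @ x > 0 else -1.0
    grad = -y * x / (1.0 + np.exp(y * (w @ x)))  # logistic-loss gradient
    w -= eta * grad
    w_avg += (w - w_avg) / t                     # incremental mean of iterates

# Risk of the averaged hypothesis, estimated on fresh data.
X = rng.standard_normal((5000, d))
err = np.mean((X @ w_avg > 0) != (X @ w_true > 0))
print(err)
```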


An On-Line Algorithm for Semantic Forgetting

Packer, Heather Stephanie (University of Southampton) | Gibbins, Nicholas (University of Southampton) | Jennings, Nicholas R (University of Southampton)

AAAI Conferences

Ontologies that evolve through use to support new domain tasks can grow extremely large. Moreover, large ontologies require more resources to use and have slower response times than small ones. To help address this problem, we present an online semantic forgetting algorithm that removes ontology fragments containing infrequently used or cheap to … In AI, this area has been studied under a variety of names such as forgetting and variable elimination [Eiter et al., 2006; Wang et al., 2008]. We provide a general approach for ranking knowledge according to its use and cost, which can be applied to systems that are limited by memory resources to evaluate memory allocation. We also provide a specific approach to select which concepts to remove from an ontology, using the ranking.
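A use-and-cost ranking of the kind described can be sketched as follows. The scoring function, fragment names, and budget are hypothetical illustrations, not the paper's actual method: each fragment is ranked by use count relative to memory cost, and the lowest-ranked fragments are forgotten first when memory is constrained.

```python
# Hypothetical sketch of use/cost ranking for semantic forgetting.
# Each ontology fragment maps to (use_count, memory_cost); fragments
# with the lowest use-per-cost ratio are the first forgetting candidates.

def rank_fragments(fragments):
    """Return fragment names ordered from most to least forgettable."""
    return sorted(fragments, key=lambda f: fragments[f][0] / fragments[f][1])

def forget(fragments, budget):
    """Drop lowest-ranked fragments until total memory cost fits the budget."""
    kept = dict(fragments)
    for name in rank_fragments(fragments):
        if sum(cost for _, cost in kept.values()) <= budget:
            break
        del kept[name]
    return kept

# Illustrative ontology: heavily used "Person", rarely used but
# expensive "Gene", moderately used "City".
onto = {"Person": (120, 4), "Gene": (2, 40), "City": (30, 10)}
print(forget(onto, budget=20))
```

With these toy numbers, "Gene" has the worst use-per-cost ratio and is forgotten first, after which the remaining fragments fit the budget.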


Improved risk tail bounds for on-line algorithms

Cesa-Bianchi, Nicolò, Gentile, Claudio

Neural Information Processing Systems

We prove the strongest known bound for the risk of hypotheses selected from the ensemble generated by running a learning algorithm incrementally on the training data. Our result is based on proof techniques that are remarkably different from the standard risk analysis based on uniform convergence arguments.

